Model Uncertainty based Active Learning on Tabular Data using Boosted Trees
Supervised machine learning relies on the availability of good labelled data
for model training. Labelled data is acquired by human annotation, which is a
cumbersome and costly process, often requiring subject matter experts. Active
learning is a sub-field of machine learning that helps obtain labelled data
efficiently by selecting the most valuable data instances for model training
and querying the human annotator for the labels of only those instances.
Recently, much research has been done in the field of active
learning, especially for deep neural network based models. Although deep
learning shines when dealing with image, textual, and multimodal data, gradient
boosting methods still tend to achieve much better results on tabular data. In
this work, we explore active learning for tabular data using boosted trees.
Uncertainty-based sampling is the most commonly used querying strategy in
active learning: labels are sequentially queried for those instances on which
the current model's prediction is maximally uncertain. Entropy
is often the choice for measuring uncertainty. However, entropy captures the
uncertainty of the predictive distribution rather than the model's own
(epistemic) uncertainty. Although there has been substantial work in deep
learning on measuring model uncertainty and employing it in active learning,
it is yet to be explored for non-neural-network models. To this end, we
explore the effectiveness of boosted-tree-based model uncertainty methods in active
learning. Leveraging this model uncertainty, we propose an uncertainty-based
sampling strategy for active learning on regression tasks with tabular data.
Additionally, we propose a novel cost-effective active learning method for
regression tasks, along with an improved cost-effective active learning
method for classification tasks.
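To make the baseline concrete, here is a minimal NumPy sketch of entropy-based uncertainty sampling, the commonly used querying strategy the abstract contrasts with model uncertainty. This is an illustration under stated assumptions, not the paper's code; the function names and toy probabilities are invented for the example.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy (in nats) of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return -np.sum(p * np.log(p), axis=1)

def select_most_uncertain(probs, k):
    """Indices of the k rows with the highest predictive entropy."""
    return np.argsort(-predictive_entropy(probs))[:k]

# Toy pool of 3 unlabelled instances with 3 classes each; an entropy
# strategy queries the near-uniform row, where the model is least sure.
pool_probs = np.array([
    [0.98, 0.01, 0.01],
    [0.34, 0.33, 0.33],
    [0.70, 0.20, 0.10],
])
query_idx = select_most_uncertain(pool_probs, 1)
```

In a full active learning loop, `pool_probs` would come from the current boosted-tree model's predictions on the unlabelled pool, and the selected instances would be sent to the annotator before retraining.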
Constrained Monotonic Neural Networks
Wider adoption of neural networks in many critical domains such as finance
and healthcare is being hindered by the need to explain their predictions and
to impose additional constraints on them. Monotonicity constraint is one of the
most requested properties in real-world scenarios and is the focus of this
paper. One of the oldest ways to construct a monotonic fully connected neural
network is to constrain the signs of its weights. Unfortunately, this
construction does not work with popular unsaturated activation functions, as
it can then only approximate convex functions. We show this shortcoming can be fixed by
constructing two additional activation functions from a typical unsaturated
monotonic activation function and applying each of them to a part of the
neurons. Our experiments show this approach to building monotonic neural
networks achieves better accuracy than other state-of-the-art methods, while
being the simplest in the sense of having the fewest parameters, and not
requiring any modifications to the learning procedure or
post-learning steps. Finally, we prove it can approximate any continuous
monotone function on a compact subset of
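A minimal NumPy sketch of the core idea may help: weights are constrained in sign (here via an absolute value), and the units are split between a convex activation and its concave reflection so the layer can bend both ways while staying monotone. This is an assumed simplification for illustration, not the authors' implementation; the function names and the 0.5 split are invented.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def monotonic_dense(x, W, b, n_convex):
    """A dense layer that is monotone non-decreasing in every input.

    Monotonicity is enforced by using |W| as the weight matrix. The first
    n_convex units use the convex activation rho(z) = ReLU(z); the rest use
    its concave reflection rho_hat(z) = -ReLU(-z).
    """
    z = x @ np.abs(W) + b
    return np.concatenate([relu(z[:, :n_convex]), -relu(-z[:, n_convex:])],
                          axis=1)

# Monotonicity check: increasing every input never decreases any output.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
b = rng.normal(size=6)
x = rng.normal(size=(8, 4))
y_lo = monotonic_dense(x, W, b, 3)
y_hi = monotonic_dense(x + 0.5, W, b, 3)
```

Because `|W|` is elementwise non-negative and both activations are non-decreasing, the composition is monotone regardless of how `W` and `b` are trained, which is why no modification of the learning procedure is needed.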
Restoration of Neonatal Retinal Images
Retinopathy of prematurity (ROP) is an eye disorder primarily affecting premature neonates. Specialists use a number of neonatal retinal images, acquired by a wide-field-of-view camera, for diagnosis and subsequent follow-up. However, premature infants’ retinal images are generally of lower visibility than adult retinal images, affecting the quality of diagnosis. We study image dehazing methods developed for general outdoor scenes and propose an image restoration scheme for neonatal retinal images based on the physical model of light propagation in a medium. The results from our restoration algorithm are useful both for analysis by human experts and for computer-aided diagnosis; specifically, we show that our method enhances vessel segmentation significantly compared to traditional methods such as adaptive histogram equalization.
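The physical model referred to here is commonly written as I = J·t + A·(1 − t), where I is the observed image, J the true scene radiance, A the ambient (air)light, and t the per-pixel transmission of the medium. A minimal NumPy sketch of inverting that model follows; it is a generic illustration of the degradation model, not the paper's restoration algorithm, and the clamp value `t_min` is an assumed safeguard.

```python
import numpy as np

def dehaze(observed, airlight, transmission, t_min=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) to recover J.

    observed: degraded image I; airlight: ambient light A;
    transmission: per-pixel t in (0, 1]. t is clamped from below so that
    near-zero transmission does not amplify noise.
    """
    t = np.maximum(transmission, t_min)
    return (observed - airlight) / t + airlight

# Synthetic round trip: degrade a known scene J, then restore it.
J = np.linspace(0.0, 1.0, 16).reshape(4, 4)
A = 0.9
t = np.full_like(J, 0.6)
I = J * t + A * (1.0 - t)
J_rec = dehaze(I, A, t)
```

In practice the difficulty lies in estimating A and t from the image itself; the inversion step above is exact only when both are known.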